In practice, many medical datasets come with an underlying taxonomy defined over the disease label space. However, existing classification algorithms for medical diagnosis typically assume semantically independent labels. In this study, we aim to leverage class hierarchies with deep learning for more accurate and reliable skin lesion recognition. We propose a hyperbolic network to jointly learn image embeddings and class prototypes. Hyperbolic space has been shown to model hierarchical relations better than Euclidean geometry. Meanwhile, we constrain the distribution of the hyperbolic prototypes with a distance matrix encoded from the class hierarchy. The learned prototypes therefore preserve the semantic class relations in the embedding space, and we can predict an image's label by assigning its features to the nearest hyperbolic class prototype. We validate our method on an in-house skin lesion dataset, which consists of around 230k dermoscopy images of 65 skin diseases. Extensive experiments provide evidence that our model achieves higher accuracy with fewer severe classification errors than models that do not consider class relations.
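The nearest-prototype rule above can be made concrete with a minimal sketch of classification in the Poincaré ball model of hyperbolic space. The two-dimensional prototypes and disease names below are illustrative stand-ins, not values from the paper:

```python
import math

def poincare_distance(u, v):
    # Geodesic distance in the Poincaré ball model of hyperbolic space:
    # d(u, v) = arcosh(1 + 2*||u-v||^2 / ((1-||u||^2) * (1-||v||^2))).
    sq_norm = lambda x: sum(t * t for t in x)
    diff = sq_norm([a - b for a, b in zip(u, v)])
    denom = (1.0 - sq_norm(u)) * (1.0 - sq_norm(v))
    return math.acosh(1.0 + 2.0 * diff / denom)

def predict(embedding, prototypes):
    # Assign the image embedding to the nearest hyperbolic class prototype.
    return min(prototypes, key=lambda c: poincare_distance(embedding, prototypes[c]))

# Hypothetical 2-D prototypes inside the unit ball (illustrative only).
prototypes = {"nevus": [0.1, 0.0], "melanoma": [-0.6, 0.2]}
print(predict([0.2, 0.05], prototypes))
```

Points near the boundary of the ball are exponentially far apart, which is what lets hyperbolic space embed tree-like label hierarchies with low distortion.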
Automated methods for skin lesion analysis and classification have advanced rapidly in recent years. As such systems are increasingly deployed in clinics, it is important to develop systems that are more robust to out-of-distribution (OOD) samples, i.e., unknown skin lesions and conditions. However, current deep learning models trained for skin lesion classification tend to misclassify these OOD samples as one of their learned skin lesion classes. To address this issue, we propose a simple yet strategic approach that improves OOD detection performance while maintaining multi-class classification accuracy on the known skin lesion classes. Notably, this approach builds on the realistic scenario in which skin lesion images pose a long-tailed and fine-grained detection task. Within this approach, (1) we first target the mixing of the medium and tail classes to address the long-tailed problem; (2) we then combine the above mixing strategy with prototype learning to address the fine-grained nature of the dataset. The unique contribution of this paper is two-fold, as demonstrated by extensive experiments. First, we pose a realistic problem setting for OOD tasks on skin lesions. Second, we propose an approach targeting the long-tailed and fine-grained aspects of this problem setting to improve OOD performance.
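The mixing step in (1) can be sketched with a standard mixup-style interpolation applied across groups. Pairing one medium-shot sample with one tail-shot sample, and the Beta parameter, are illustrative assumptions here, not the paper's exact recipe:

```python
import random

def inter_group_mixup(x_medium, y_medium, x_tail, y_tail, alpha=0.4):
    # Draw a mixing coefficient from Beta(alpha, alpha) and linearly
    # interpolate a medium-class sample with a tail-class sample; the
    # label becomes a soft distribution over the two classes.
    lam = random.betavariate(alpha, alpha)
    x_mix = [lam * a + (1.0 - lam) * b for a, b in zip(x_medium, x_tail)]
    y_mix = {y_medium: lam, y_tail: 1.0 - lam}
    return x_mix, y_mix
```

The intent is that synthetic samples populate the sparse tail regions of feature space, so the model sees plausible variation for rare classes instead of memorizing the few tail images.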
We present a sound and complete algorithm, called iterative causal discovery (ICD), for recovering causal graphs in the presence of latent confounders and selection bias. ICD relies on the causal Markov and faithfulness assumptions and recovers the equivalence class of the underlying causal graph. It starts with a complete graph and consists of a single iterative stage that gradually refines this graph by identifying conditional independence (CI) between connected nodes. Independence and causal relations entailed after any iteration are correct, rendering ICD anytime. Essentially, we tie the size of the CI conditioning set to its distance on the graph from the tested nodes, and increase this value in successive iterations. Thus, each iteration refines a graph recovered by previous iterations that used smaller conditioning sets, which have higher statistical power, and this contributes to stability. We demonstrate empirically that ICD requires significantly fewer CI tests and learns more accurate causal graphs compared to the FCI, FCI+, and RFCI algorithms.
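The iterative refinement can be illustrated with a simplified PC-style skeleton search in which the conditioning-set size grows by one per iteration. This sketch deliberately omits ICD's distance-based restriction of candidate conditioning sets and its machinery for latent confounders and selection bias:

```python
from itertools import combinations

def iterative_skeleton(nodes, ci_test, max_size):
    # Start from the complete graph; at iteration r, remove the edge X-Y if
    # X and Y are conditionally independent given some size-r subset of X's
    # remaining neighbors.  Earlier iterations use smaller conditioning
    # sets, which have higher statistical power.
    adj = {x: set(nodes) - {x} for x in nodes}
    for r in range(max_size + 1):
        for x in nodes:
            for y in sorted(adj[x]):
                for z in combinations(sorted(adj[x] - {y}), r):
                    if ci_test(x, y, set(z)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        break
    return adj

# Toy chain A - B - C: A and C are independent given B.
ci = lambda x, y, z: {x, y} == {"A", "C"} and "B" in z
print(iterative_skeleton(["A", "B", "C"], ci, max_size=1))
```

Because every edge removal is licensed by a verified CI statement, the graph after each iteration is already usable, which mirrors the anytime property described above.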
Distributed mean estimation (DME) is a central building block in federated learning, where clients send local gradients to a parameter server for averaging and updating the model. Due to communication constraints, clients often use lossy compression techniques to compress the gradients, resulting in estimation inaccuracies. DME is more challenging when clients have diverse network conditions, such as constrained communication budgets and packet losses. In such settings, DME techniques often incur a significant increase in the estimation error, leading to degraded learning performance. In this work, we propose a robust DME technique named Eden that naturally handles heterogeneous communication budgets and packet losses. We derive appealing theoretical guarantees for Eden and evaluate it empirically. Our results demonstrate that Eden consistently improves over state-of-the-art DME techniques.
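A common building block for compressing a gradient under a tight bit budget is unbiased stochastic quantization, so that the server's average is correct in expectation. The sketch below is a generic DME baseline, not Eden's actual scheme (which, among other things, applies a random rotation before quantizing):

```python
import random

def stochastic_quantize(v, lo, hi):
    # Round each coordinate to lo or hi so that the expectation equals the
    # original value: x is sent as hi with probability (x - lo) / (hi - lo).
    out = []
    for x in v:
        p = (x - lo) / (hi - lo)
        out.append(hi if random.random() < p else lo)
    return out

def server_mean(client_vectors):
    # The parameter server averages the (lossily compressed) client vectors.
    n = len(client_vectors)
    return [sum(col) / n for col in zip(*client_vectors)]
```

Unbiasedness matters because per-client quantization errors then cancel in the average; heterogeneity (different budgets, dropped packets) breaks naive schemes precisely because the surviving contributions are no longer a clean unbiased sample.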
We consider the problem where clients transmit $d$-dimensional real-valued vectors using $d(1+o(1))$ bits each, in a manner that allows a receiver to approximately reconstruct their mean. Such compression problems arise naturally in distributed and federated learning. We provide novel mathematical results and derive computationally efficient algorithms that are more accurate than previous compression techniques. We evaluate our methods on a collection of distributed and federated learning tasks, using a variety of datasets, and show consistent improvements over the state of the art.
KL-regularized reinforcement learning from expert demonstrations has proved successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks. However, we show that KL-regularized reinforcement learning with behavioral reference policies derived from expert demonstrations can suffer from pathological training dynamics that can lead to slow, unstable, and suboptimal online learning. We show empirically that the pathology occurs for commonly chosen behavioral policy classes and demonstrate its impact on sample efficiency and online policy performance. Finally, we show that the pathology can be remedied by non-parametric behavioral reference policies and that this allows KL-regularized reinforcement learning to significantly outperform state-of-the-art approaches on a variety of challenging locomotion and dexterous hand manipulation tasks.
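The KL-regularized objective augments the environment reward with a per-step penalty for deviating from the behavioral reference policy. A minimal sketch, with an illustrative temperature `alpha` that is not taken from the paper:

```python
import math

def kl_regularized_reward(reward, logp_policy, logp_reference, alpha=0.1):
    # Augmented reward: r(s, a) - alpha * (log pi(a|s) - log pi_ref(a|s)).
    # When a parametric reference policy assigns vanishing probability to
    # an action (logp_reference -> -inf), the penalty blows up; this is one
    # way pathological training dynamics can surface, and non-parametric
    # reference policies avoid collapsing probability mass in this way.
    return reward - alpha * (logp_policy - logp_reference)
```

Summed over a trajectory and taken in expectation, the penalty term is exactly `alpha` times the KL divergence from the learned policy to the reference policy.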
State-of-the-art language models are often accurate on many question-answering benchmarks with well-defined questions. Yet, in real settings questions are often unanswerable without asking the user for clarifying information. We show that current SotA models often do not ask the user for clarification when presented with imprecise questions and instead provide incorrect answers or "hallucinate". To address this, we introduce CLAM, a framework that first uses the model to detect ambiguous questions and, if an ambiguous question is detected, prompts the model to ask the user for clarification. Furthermore, we show how to construct a scalable and cost-effective automatic evaluation protocol using an oracle language model with privileged information to provide clarifying information. We show that our method achieves a 20.15 percentage point accuracy improvement over SotA on a novel ambiguous question-answering dataset derived from TriviaQA.
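The CLAM control flow can be sketched as a small pipeline. In the paper each step is implemented by prompting a language model; the callables below are hypothetical stand-ins for those prompted calls:

```python
def clam_answer(question, is_ambiguous, ask_user, answer_model):
    # Detect ambiguity first; if the question is ambiguous, ask the user
    # (or, during evaluation, an oracle model with privileged information)
    # for clarification, then answer the clarified question.
    if is_ambiguous(question):
        clarification = ask_user(question)
        question = f"{question} (clarification: {clarification})"
    return answer_model(question)
```

The same structure supports the automatic evaluation protocol: swapping the human for an oracle model that holds the disambiguating fact makes the loop cheap to run at benchmark scale.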
Inference from large autoregressive models like Transformers is slow - decoding K tokens takes K serial runs of the model. In this work we introduce speculative decoding - an algorithm to sample from autoregressive models faster without any changes to the outputs, by computing several tokens in parallel. At the heart of our approach lie the observations that (1) hard language-modeling tasks often include easier subtasks that can be approximated well by more efficient models, and (2) using speculative execution and a novel sampling method, we can make exact decoding from the large models faster, by running them in parallel on the outputs of the approximation models, potentially generating several tokens concurrently, and without changing the distribution. Our method supports existing off-the-shelf models without retraining or architecture changes. We demonstrate it on T5-XXL and show a 2X-3X acceleration compared to the standard T5X implementation, with identical outputs.
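The accept/reject rule at the core of speculative decoding can be sketched for a single draft token, where `p` and `q` are the target and draft models' next-token distributions. This is the standard speculative-sampling correction, shown over toy dictionaries rather than real model logits:

```python
import random

def speculative_step(p, q, draft_token):
    # Accept the draft model's token x with probability min(1, p(x)/q(x)).
    # On rejection, resample from the residual distribution max(0, p - q),
    # renormalized.  The returned token is distributed exactly according
    # to the target distribution p, so outputs are unchanged.
    if random.random() < min(1.0, p[draft_token] / q[draft_token]):
        return draft_token
    residual = {t: max(0.0, p[t] - q[t]) for t in p}
    z = sum(residual.values())
    r = random.random() * z
    for token, w in residual.items():
        if r < w:
            return token
        r -= w
    return max(residual, key=residual.get)  # numerical edge case
```

Running this rule over a block of K draft tokens is what yields the speedup: one parallel pass of the large model scores all K guesses, and on average several are accepted per pass.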
Learned classifiers should often possess certain invariance properties meant to encourage fairness, robustness, or out-of-distribution generalization. However, multiple recent works empirically demonstrate that common invariance-inducing regularizers are ineffective in the over-parameterized regime, in which classifiers perfectly fit (i.e. interpolate) the training data. This suggests that the phenomenon of "benign overfitting," in which models generalize well despite interpolating, might not favorably extend to settings in which robustness or fairness are desirable. In this work we provide a theoretical justification for these observations. We prove that, even in the simplest of settings, any interpolating learning rule (with arbitrarily small margin) will not satisfy these invariance properties. We then propose and analyze an algorithm that, in the same setting, successfully learns a non-interpolating classifier that is provably invariant. We validate our theoretical observations on simulated data and the Waterbirds dataset.
Comparing Bayesian neural networks (BNNs) with different widths is challenging because, as the width increases, multiple model properties change simultaneously, and inference in the finite-width case is intractable. In this work, we empirically compare finite- and infinite-width BNNs, and provide quantitative and qualitative explanations for their performance difference. We find that when the model is mis-specified, increasing width can hurt BNN performance. In these cases, we provide evidence that finite-width BNNs generalize better, in part due to properties of their frequency spectrum that allow them to adapt under model mismatch.